
Pull Request regarding [Good First Issue]: Support quantized::conv1d, quantized::linear_relu #29343 #29948

Open · wants to merge 9 commits into master
Conversation

VaishnaviOnPC

Add quantized Conv1d and Conv1d_ReLU lowering to the PyTorch Frontend

Comment on lines 91 to 137
OutputVector translate_quantized_conv1d(const NodeContext& context) {
    // "quantized::conv1d.new(Tensor qx, __torch__.torch.classes.quantized.Conv1dPackedParamsBase packed_weight,
    // float output_scale, int output_zero_point) -> Tensor"
    num_inputs_check(context, 4, 4);

    auto input = context.get_input(0);

    // (N, C, L) -> (N, C, L, 1)
    auto unsqueeze_axis = ov::op::v0::Constant::create(element::i64, Shape{1}, {3});
    auto unsqueezed_input = context.mark_node(std::make_shared<ov::op::v0::Unsqueeze>(input, unsqueeze_axis));

    // Lower as if it were a 2D conv with a singleton trailing spatial dim
    auto conv = translate_quantized_convnd_base(context, unsqueezed_input);

    // (N, C, L, 1) -> (N, C, L)
    auto squeeze_axis = ov::op::v0::Constant::create(element::i64, Shape{1}, {3});
    auto squeezed_output = context.mark_node(std::make_shared<ov::op::v0::Squeeze>(conv, squeeze_axis));

    auto scale = context.get_input(2);
    auto zero_point = context.get_input(3);

    return {quantize(context, squeezed_output, scale, zero_point, input)};
}

OutputVector translate_quantized_conv1d_relu(const NodeContext& context) {
    // Same signature as quantized::conv1d, with ReLU fused before requantization
    num_inputs_check(context, 4, 4);

    auto input = context.get_input(0);

    // (N, C, L) -> (N, C, L, 1)
    auto unsqueeze_axis = ov::op::v0::Constant::create(element::i64, Shape{1}, {3});
    auto unsqueezed_input = context.mark_node(std::make_shared<ov::op::v0::Unsqueeze>(input, unsqueeze_axis));

    // Rewire input 0 to the unsqueezed tensor, then reuse the 2D lowering
    NodeContext new_context = context;
    new_context.m_inputs[0] = unsqueezed_input;

    auto conv = translate_quantized_convnd_base(new_context);
    auto relu = context.mark_node(std::make_shared<v0::Relu>(conv));

    // (N, C, L, 1) -> (N, C, L)
    auto squeeze_axis = ov::op::v0::Constant::create(element::i64, Shape{1}, {3});
    auto squeezed_output = context.mark_node(std::make_shared<ov::op::v0::Squeeze>(relu, squeeze_axis));

    auto scale = context.get_input(2);
    auto zero_point = context.get_input(3);

    return {quantize(context, squeezed_output, scale, zero_point, input)};
}


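The trick the PR relies on — a 1D convolution is exactly a 2D convolution over the input with a trailing singleton spatial dimension, which is what the `Unsqueeze`/`Squeeze` pair on axis 3 implements — can be checked numerically. A minimal numpy sketch (the naive `conv1d`/`conv2d` helpers below are illustrative, not part of the PR):

```python
import numpy as np

def conv1d(x, w):
    """Naive valid 1D cross-correlation: x of shape (L,), w of shape (K,)."""
    L, K = len(x), len(w)
    return np.array([np.dot(x[i:i + K], w) for i in range(L - K + 1)])

def conv2d(x, w):
    """Naive valid 2D cross-correlation: x of shape (H, W), w of shape (KH, KW)."""
    H, W = x.shape
    KH, KW = w.shape
    out = np.empty((H - KH + 1, W - KW + 1))
    for i in range(H - KH + 1):
        for j in range(W - KW + 1):
            out[i, j] = np.sum(x[i:i + KH, j:j + KW] * w)
    return out

x = np.arange(8.0)
w = np.array([1.0, -2.0, 0.5])

# (L,) -> (L, 1): add a singleton trailing spatial dim, analogous to
# the PR's Unsqueeze on axis 3; the kernel gets the same treatment.
y2d = conv2d(x[:, None], w[:, None])   # shape (L - K + 1, 1)
y = y2d[:, 0]                          # Squeeze the singleton dim back off

assert np.allclose(y, conv1d(x, w))
```

Each 2D window here is a K×1 column, so its dot product with the K×1 kernel is exactly the corresponding 1D window's dot product, which is why the squeeze/unsqueeze lowering is lossless.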
Contributor
We already have convnd versions. Please use them; if they cannot support your case, modify them.

@mvafin
Contributor

mvafin commented Apr 7, 2025

Well... You at least need to add the operation in op_table.cpp to the list of supported operations.
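For context, registering an op in the PyTorch frontend means adding entries to the supported-op map in op_table.cpp. A sketch only, assuming the translator names from this PR; the exact map name, macro, and file layout should be checked against the current OpenVINO sources:

```cpp
// Hypothetical fragment for src/frontends/pytorch/src/op_table.cpp:
// map each TorchScript op name to its translator function.
// The op names and translator names below follow this PR; verify both
// against the surrounding entries before adding them.
{"quantized::conv1d", op::translate_quantized_conv1d},
{"quantized::conv1d_relu", op::translate_quantized_conv1d_relu},
```

Without these entries the frontend never dispatches to the new translators, which is why the reviewer flags this as the minimum required change.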

@VaishnaviOnPC VaishnaviOnPC requested a review from mvafin April 7, 2025 16:14
Labels: category: PyTorch FE (OpenVINO PyTorch Frontend) · ExternalPR (External contributor)
3 participants